0 bookmark(s) - Sort by: Date ↓ / Title /
An analysis of Large Language Models' (LLMs) vulnerability to prompt injection attacks and potential risks when used in adversarial situations, like on the Internet. The author notes that, similar to the old phone system, LLMs are vulnerable to prompt injection attacks and other security risks due to the intertwining of data and control paths.
This post highlights how the GitHub Copilot Chat VS Code Extension was vulnerable to data exfiltration via prompt injection when analyzing untrusted source code.
First / Previous / Next / Last
/ Page 1 of 0